generalized least squares (English Wikipedia)

In statistics, generalized least squares (GLS) is a technique for estimating the unknown parameters in a linear regression model. GLS is used when there is a certain degree of correlation between the residuals of the regression, or when their variances are unequal. In these cases, ordinary least squares and weighted least squares can be statistically inefficient, or even give misleading inferences. GLS was first described by Alexander Aitken in 1934.
== Method outline ==
In a typical linear regression model we observe data {''y''_''i'', ''x''_''ij''; ''i'' = 1, ..., ''n''} on ''n'' statistical units. The response values are placed in a vector ''Y'' = (''y''_1, ..., ''y''_''n'')′, and the predictor values are placed in the design matrix ''X'' = [''x''_''ij''], where ''x''_''ij'' is the value of the ''j''th predictor variable for the ''i''th unit. The model assumes that the conditional mean of ''Y'' given ''X'' is a linear function of ''X'', whereas the conditional variance of the error term given ''X'' is a ''known'' nonsingular matrix Ω. This is usually written as
:
Y = X\beta + \varepsilon, \qquad \mathrm{E}[\varepsilon \mid X] = 0, \ \operatorname{Cov}[\varepsilon \mid X] = \Omega.
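
The model above can be illustrated with a minimal NumPy sketch; the design matrix, true coefficients, and the diagonal Ω below are invented for illustration and are not part of the article:

```python
import numpy as np

# Sketch of the GLS data model Y = X beta + eps with a *known*
# error covariance Omega; all names and values are illustrative.
rng = np.random.default_rng(0)

n = 100
X = np.column_stack([np.ones(n), rng.normal(size=n)])  # design matrix with intercept
beta = np.array([1.0, 2.0])                            # true coefficients

# Known heteroskedastic covariance: error variance grows with the predictor.
Omega = np.diag(1.0 + X[:, 1] ** 2)

# Draw errors with covariance Omega via its Cholesky factor.
eps = np.linalg.cholesky(Omega) @ rng.normal(size=n)
Y = X @ beta + eps
print(Y.shape)  # (100,)
```

Here E[ε | X] = 0 holds by construction, and Cov[ε | X] = Ω because the Cholesky factor L satisfies L L′ = Ω.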

Here ''β'' is a vector of unknown “regression coefficients” that must be estimated from the data.
Suppose ''b'' is a candidate estimate for ''β''. Then the residual vector for ''b'' is ''Y'' − ''Xb''. The generalized least squares method estimates ''β'' by minimizing the squared Mahalanobis length of this residual vector:
:
\hat\beta = \underset{b}{\operatorname{arg\,min}} \, (Y - Xb)' \, \Omega^{-1} (Y - Xb).

Since the objective is a quadratic form in ''b'', the estimator has an explicit formula:
:
\hat\beta = (X'\Omega^{-1}X)^{-1} X'\Omega^{-1}Y.
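
The closed-form estimator can be sketched in NumPy; the simulated data and the diagonal Ω below are assumptions for illustration, not part of the article:

```python
import numpy as np

# Sketch of the closed-form GLS estimator
#   beta_hat = (X' Omega^{-1} X)^{-1} X' Omega^{-1} Y
# on simulated data with a known diagonal error covariance Omega.
rng = np.random.default_rng(1)

n = 200
X = np.column_stack([np.ones(n), rng.normal(size=n)])
beta = np.array([1.0, 2.0])                       # true coefficients
Omega = np.diag(np.linspace(0.5, 5.0, n))         # known error covariance
eps = np.sqrt(np.diag(Omega)) * rng.normal(size=n)
Y = X @ beta + eps

# GLS: solve (X' Omega^{-1} X) b = X' Omega^{-1} Y rather than
# forming the inverse of the normal-equations matrix explicitly.
Omega_inv = np.linalg.inv(Omega)
beta_gls = np.linalg.solve(X.T @ Omega_inv @ X, X.T @ Omega_inv @ Y)

# OLS for comparison: still unbiased here, but less efficient.
beta_ols = np.linalg.solve(X.T @ X, X.T @ Y)
print(beta_gls, beta_ols)
```

Using `np.linalg.solve` on the normal equations avoids explicitly inverting X′Ω⁻¹X, which is numerically preferable; with a diagonal Ω this reduces to weighted least squares with weights 1/ω_ii.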


Excerpt source: Wikipedia, the free encyclopedia. The full text of the article “generalized least squares” is available on Wikipedia.




Copyright(C) kotoba.ne.jp 1997-2016. All Rights Reserved.